International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print)

ISSN: 2074-9015 (Online)

DOI: https://doi.org/10.5815/ijitcs

Website: https://www.mecs-press.org/ijitcs

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 142


IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and an indispensable reference for those working at the cutting edge of information technology and computer science applications.

 

IJITCS has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.

Latest Issue
Most Viewed
Most Downloaded

IJITCS Vol. 18, No. 2, Apr. 2026

REGULAR PAPERS

A Study of Using Convolutional Neural Networks in Fractal Dimension Estimation of Grayscale and Color Images

By Moheb R. Girgis Al Hussien Seddik Saad Mohammed M. Talaat

DOI: https://doi.org/10.5815/ijitcs.2026.02.01, Pub. Date: 8 Apr. 2026

Fractal dimension (FD) estimation is widely used to characterize image complexity and self-similarity in image analysis and texture characterization. Traditional FD estimators such as Box-Counting (BC) and Differential Box-Counting (DBC) are simple and efficient but can be sensitive to scale selection, resolution, and noise. This paper investigates the effectiveness of using a convolutional neural network (CNN) for FD estimation compared to traditional methods. To this end, we have developed a CNN-based method for FD estimation under a fair and reproducible evaluation design. First, we have included an analytic-fractals benchmark (such as the Sierpinski and Koch families) with closed-form FD values for independent evaluation. Second, for large-scale Julia/Mandelbrot images, FD labels are treated as reference estimates computed from multiple BC/DBC parameter settings and reported as mean ± standard deviation to quantify label uncertainty. We additionally assess behavior on an external natural-texture dataset and evaluate robustness under controlled degradations (noise, blur, compression, and downsampling). Performance is reported on large test sets using MAE/RMSE with 95% confidence intervals (bootstrap), together with per-image inference time under clearly specified hardware settings. Results indicate that the proposed CNN-based method provides stable FD estimation and fast inference, particularly under noise and resolution variations.
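
The traditional box-counting baseline the paper compares against can be sketched in a few lines: count, for each box size s, how many boxes contain foreground pixels, then fit log(count) against log(1/s). This is a minimal illustration, not the authors' implementation; the box sizes and edge-trimming strategy are illustrative choices.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    for each box size s, count boxes containing any foreground pixel,
    then fit log(count) against log(1/s); the slope is the FD estimate."""
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly into s x s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        tiles = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)
```

As a sanity check, a completely filled image yields a slope of 2, the dimension of the plane.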

[...] Read more.
The Role of Artificial Intelligence in the Career Expectations of Ukrainian Students: Implications for Higher Education

By Olena Semenikhina Marina Drushlyak

DOI: https://doi.org/10.5815/ijitcs.2026.02.02, Pub. Date: 8 Apr. 2026

The article presents the results of an empirical study on the attitudes of students at Ukrainian higher education institutions toward the role of artificial intelligence (AI), particularly ChatGPT, in the context of their future professional careers. The aim of this study is to determine whether students perceive ChatGPT (a generative AI tool) as a threat, an opportunity, or a multidimensional phenomenon that requires critical evaluation. The research methodology included the construction of two composite indices. These were the ChatGPT Opportunities Index and the ChatGPT Threats Index, both related to career development. The indices were based on responses from 354 students. All participants took part in the international "Global ChatGPT Student Survey". Data analysis employed descriptive statistics, analysis of variance (ANOVA), correlation analysis, clustering, and the χ² test. The results showed that the ChatGPT Opportunities Index was moderately higher than the ChatGPT Threats Index. This indicates a predominantly cautious optimism in students’ attitudes toward AI. At the same time, statistical analysis did not reveal any significant relationships between these indices and such variables as level of education, gender, or confidence in future employment. Cluster analysis identified three types of student attitudes: Realists, Reflective Optimists, and Disengaged. A synthesis of the results indicates that students show both interest in ChatGPT and a need for support from educational institutions in developing critical interaction skills with intelligent technologies. The study concludes that there is a need to integrate AI literacy into academic programs. It also highlights the importance of developing interdisciplinary training models and implementing educational interventions that foster adaptability and digital resilience among students.

[...] Read more.
Underwater Image Dehazing: A Comprehensive Approach

By Sumalatha A. Aruna S. K.

DOI: https://doi.org/10.5815/ijitcs.2026.02.03, Pub. Date: 8 Apr. 2026

Underwater imaging has advanced in recent times through efforts to correct color distortion, increase contrast, and improve image clarity under low-light conditions. Deep learning has been effective in enhancing image quality, but challenges persist in the dehazing process due to data inconsistencies. To address this, a new scheme is proposed in this study. Unlike other methods that depend only on single captured images, we use images taken under other conditions to overcome this limitation, training the model to improve underwater images in general, irrespective of the water conditions. A key innovation is the decomposition and synthesis of multi-channel illuminance data. Specifically, we decompose the input image into its red, green, and blue channels, and then approximate the illuminance component within each channel. By independently manipulating and reconstructing these channel-specific illuminance maps, we can effectively address the non-uniform light scattering and absorption that are characteristic of underwater environments. This allows us to correct for the inherent color casts and haze that degrade image quality. To further refine the enhancement, we incorporate advanced color correction methods, such as image saliency exploration and white balance adjustment, to compensate for color attenuation caused by light absorption at different depths. These techniques effectively restore lost colors and enhance contrast, thereby improving image clarity and sharpness. This is useful in engineering applications and also forms a foundation for further exploration of methods for improving images captured underwater. Experimental results show that the proposed method significantly enhances image quality, making it highly effective for underwater detection and exploration tasks, offering an innovative solution for hazy images in various conditions, and advancing underwater monitoring and exploration technologies.

[...] Read more.
Development of User Story and Design Thinking Integration Teaching Model for Software Engineering Education

By Muhammad Ihsan Zul Suhaila Mohd. Yasin Dadang Syarif Sihabudin Sahid

DOI: https://doi.org/10.5815/ijitcs.2026.02.04, Pub. Date: 8 Apr. 2026

User stories (US) play a vital role in requirement engineering, yet they often encounter challenges such as ambiguity, inefficiency, and low quality. Many Indonesian universities face difficulties in equipping students with practical skills essential for crafting effective US, despite efforts to align curricula with industry standards. Moreover, existing approaches that integrate design thinking (DT) into educational settings are limited, as they either do not adequately emphasize the US or do not yet address the unique needs of educational contexts. This study presents an innovative US-DT integrated teaching model to enhance students’ experience developing industry-relevant user stories. Utilizing an action research methodology, the study incorporates surveys and literature reviews to guide the model's development. The model was tested with a sample of Indonesian software engineering undergraduate students, focusing on evaluating their satisfaction levels through metrics such as perceived usefulness (PU), learning motivation (LM), learning satisfaction (LS), and perceived ease of use (PEOU). The impact of the model was assessed via the Mann-Whitney U Test and Cliff’s Delta effect size, comparing it against regular teaching methods. Results demonstrate significant improvements in PU, LM, and LS, indicating effectiveness, although PEOU remains a key limitation requiring further refinement. Future research should focus on improving PEOU by refining teaching strategies, optimizing session management, introducing preparatory workshops, and extending the model’s application to different student groups to validate and broaden its educational impact. The findings suggest that adapting US and DT from industry can notably enrich student learning experiences.

[...] Read more.
Representation of Dynamic Basic Block in Software Evolution Using Incidence Matrix

By Rajeeb S. Bal Jibendu K. Mantri

DOI: https://doi.org/10.5815/ijitcs.2026.02.05, Pub. Date: 8 Apr. 2026

Software evolution is a continuous process that transforms changing user requirements into improved software systems. Establishing a clear and well-structured development process is widely recognized as an effective means to enhance software maintainability, quality, and productivity. Tailoring software processes from existing process patterns and standards is essential for improving process performance, ensuring product quality, reducing development risks, and minimizing rework. Despite its importance, current research lacks a systematic and formally grounded method for tailoring software evolution processes. In this paper, we propose a structured approach based on Petri Net (PN) theory to address this limitation. Four fundamental process constructs, namely sequence, concurrency, selection, and iteration, are identified as basic building blocks for modeling software evolution processes. Using these constructs, four tailoring operations, namely adding, deleting, splitting, and merging, are formally defined. To support scalable process composition, matrix-based representations of Petri Nets (PNs) are employed. Incidence and related matrices provide a concise and mathematically tractable representation of both place/transition nets and restricted PNs, enabling the identification of essential structural properties of software processes. In addition, reachability analysis and firing rules are used to derive a mathematical behavioral notation that captures binary relationships between input and output variables. This notation facilitates precise analysis of dynamic behavior for systematic software process tailoring.
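
The incidence-matrix representation referred to above can be illustrated with a minimal sketch: firing an enabled transition t updates the marking by the standard state equation M' = M + C[:, t]. The three-place sequential net below is a toy example chosen for illustration, not a process model from the paper.

```python
import numpy as np

# Incidence matrix C (rows = places, columns = transitions) for a toy
# sequential net: p0 --t0--> p1 --t1--> p2 (illustrative example only).
C = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]])

def fire(marking, t):
    """Fire transition t via the state equation M' = M + C[:, t];
    the transition is enabled only if no place count goes negative."""
    new = np.asarray(marking) + C[:, t]
    if (new < 0).any():
        raise ValueError("transition %d not enabled" % t)
    return new
```

Firing t0 from the initial marking [1, 0, 0] moves the token to p1, and firing t1 then moves it to p2, reproducing the sequence construct.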

[...] Read more.
An Ontology Driven Machine Learning Framework for Early Prediction in Children with Cerebral Palsy

By Rahma Haouas Zahwanie Lilia Cheniti-Belcadhi Saoussen Layouni Ghada El Khayat

DOI: https://doi.org/10.5815/ijitcs.2026.02.06, Pub. Date: 8 Apr. 2026

Cerebral palsy (CP) is a neurological disorder that affects 2-3 in every 1,000 births worldwide. Early prediction of severity is vital for optimizing therapeutic interventions. This study introduces OntoML-CP, a novel hybrid intelligence framework that combines inductive machine learning with deductive ontology-based reasoning to predict Gross Motor Function Classification System (GMFCS) levels in children with CP. We present a hybrid architecture combining semantic features from a CP ontology and clinical data for machine learning, using ontological reasoning to refine predictions and improve clinical validity and interpretability. The clinical ontology built using OWL captures the relationships between symptoms of cerebral palsy, developmental disorders, and motor functions, enriched with clinical knowledge and FOAF to represent key stakeholders like patients, parents, and therapists. Using a synthetic dataset of 1,695 children with CP, generated by physical medicine and rehabilitation specialists based on real clinical cases and validated through expert review, we address demographic diversity and missing data through preprocessing techniques to correct class imbalance during model evaluation and selection. Seven supervised algorithms were evaluated, among which Random Forest and Gradient Boosting models achieved superior performance (accuracy: 85% and 83%), when augmented with our ontological framework. The models showed consistent performance across all GMFCS levels with macro-averaged F1-scores of 0.81 and 0.79, respectively, and maintained high sensitivity for severe cases (levels 4-5), significantly outperforming baseline models. The semantic layer enhances predictions with logical explanations and presents them through SPARQL queries and intuitive visual formats designed for healthcare professionals. 
Our ontology-driven approach provides medicine with not only accurate predictions but also context-aware, clinically interpretable explanations that support informed decisions and enable personalized, actionable CP severity predictions.

[...] Read more.
Sensor Data Fusion in Healthcare Monitoring System with Appropriate Rule-based Model for Error Reduction

By Vivek Sharma S. Mahesh Kaluti

DOI: https://doi.org/10.5815/ijitcs.2026.02.07, Pub. Date: 8 Apr. 2026

A healthcare monitoring system (HMS) performs continuous and periodic evaluation of patients or individuals. The HMS uses multiple sensors to monitor the health status of patients. However, conventional methods are subject to errors in the computation of data in this environment. Hence, this paper proposes an Optimized Rule-based Sugeno Fuzzy Hidden Markov Model Stacked Deep Learning (ORSF-HMM-SDL) model for error reduction. The proposed ORSF-HMM-SDL model uses associative rules to compute the health status of patients, and an optimized fuzzy system to estimate the features, with classification performed by stacked deep learning. The ORSF-HMM-SDL uses a Sugeno fuzzy inference model to assess the health status of patients, and a Hidden Markov Model (HMM) to compute the features used for error estimation. With the estimated features, the ORSF-HMM-SDL model performs classification with the stacked deep learning model; the data fused from the sensors are applied and classified with the deep learning model. The simulation results demonstrate the effectiveness of four fusion techniques in healthcare monitoring systems: ORSF-HMM-SDL Fusion, Kalman Filter Fusion, Deep Learning (DL) Based Fusion, and SVM Based Fusion. The study evaluates their performance using metrics such as accuracy, sensitivity, specificity, precision, recall, F1-score, error rate, latency, and throughput. The Deep Learning Fusion method achieves the highest accuracy of 96.5% for heart disease detection, 94.2% for diabetes, and 95.8% for hypertension, with an overall accuracy of 98.3% for healthy individuals. The method also records a high F1-score of 98.3% for healthy individuals, 94.2% for heart disease, and 91.7% for hypertension.
In comparison, ORSF-HMM-SDL Fusion shows strong performance, with an overall accuracy of 94.8%, sensitivity of 92.3%, and specificity of 96.1%, along with a low error rate of 5.2%. The Kalman Filter Fusion and SVM Based Fusion methods, while effective, show slightly lower performance across most metrics, with the Kalman Filter achieving 93.0% accuracy and Deep Learning showing superior performance with a throughput of 700 data points/min. These findings demonstrate that deep learning fusion offers the highest overall performance.
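
As a rough sketch of the zero-order Sugeno inference the model family relies on, the toy rule base below maps a heart-rate reading to a risk score as a weighted average of per-rule constant outputs. The membership ranges, rule outputs, and function names are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_risk(heart_rate):
    """Zero-order Sugeno inference: each rule pairs a membership degree
    with a constant output; the result is their weighted average."""
    rules = [
        (tri(heart_rate, 40, 60, 80),   0.2),  # low heart rate  -> some risk
        (tri(heart_rate, 60, 80, 100),  0.1),  # normal          -> minimal risk
        (tri(heart_rate, 90, 120, 160), 0.8),  # elevated        -> high risk
    ]
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 0.0
```

A reading of 80 bpm fully matches only the "normal" rule and returns its output 0.1, while 120 bpm matches only the "elevated" rule and returns 0.8.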

[...] Read more.
PINNs for Stochastic Dynamics: Modeling Brownian Motion via Verlet Integration

By Yulison Herry Julian Evan Jeremia Oktavian Ferry Faizal

DOI: https://doi.org/10.5815/ijitcs.2026.02.08, Pub. Date: 8 Apr. 2026

This study presents a Physics-Informed Neural Network (PINN) framework for modeling stochastic systems like Brownian motion, designed to overcome critical challenges in physical consistency and numerical stability that affect classical solvers and standard data-driven models. Traditional numerical methods often struggle with high-dimensional spaces or sparse data, while many machine learning approaches fail to enforce fundamental physical laws. To address this, our proposed PINN architecture integrates a multi-component loss function that explicitly enforces the Fokker-Planck equation, which describes the system’s governing physics, alongside boundary conditions and a global probability conservation law. This physics-informed approach is anchored by high-fidelity training data generated from Verlet-integrated trajectories of the underlying Langevin dynamics. We validate our model against the analytical solution for one-dimensional Brownian motion, demonstrating its ability to accurately recover the true probability density function (PDF). Rigorous comparisons using statistical metrics show superior accuracy over a canonical data-driven operator learning model, DeepONet. Specifically, our PINN achieves a relative L2 error of 5.66% and maintains probability normalization within a 0.03% tolerance, significantly outperforming DeepONet’s 32.46% error and 3.2% probability deviation. Furthermore, a recursive error-bounding technique provides quantifiable confidence in the model’s predictions. While validated in a low-dimensional system, our framework demonstrates a promising and robust methodology for problems in fields like soft matter physics and financial modeling, where both physical consistency and data-driven flexibility are crucial. We also provide a transparent analysis of the model’s computational trade-offs, positioning this physics-informed approach as a reliable tool for complex scientific applications.
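
The Fokker-Planck constraint enforced in the PINN loss can be checked directly against the analytic 1D Brownian-motion solution: the Gaussian PDF p(x, t) = exp(-x²/4Dt)/√(4πDt) should satisfy ∂p/∂t = D ∂²p/∂x². A minimal finite-difference check (with D = 1 assumed purely for illustration):

```python
import numpy as np

D = 1.0  # diffusion coefficient (illustrative value)

def pdf(x, t):
    """Analytic PDF of free 1D Brownian motion started at the origin."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

def fp_residual(x, t, h=1e-3):
    """Central-difference residual of the Fokker-Planck (diffusion)
    equation, dp/dt - D * d2p/dx2; near zero when p solves it."""
    dp_dt = (pdf(x, t + h) - pdf(x, t - h)) / (2 * h)
    d2p_dx2 = (pdf(x + h, t) - 2 * pdf(x, t) + pdf(x - h, t)) / h**2
    return dp_dt - D * d2p_dx2
```

A PINN's physics loss penalizes exactly this residual (computed with automatic differentiation rather than finite differences) at sampled collocation points.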

[...] Read more.
Lightweight and Explainable Neural Models for Multilingual Movie Script Certification

By Pratik N. Kalamkar Prasadu Peddi Yogesh K. Sharma

DOI: https://doi.org/10.5815/ijitcs.2026.02.09, Pub. Date: 8 Apr. 2026

Automated film certification remains an underexplored regulatory challenge, requiring scalable yet transparent models capable of handling full-length multilingual scripts. This paper presents a unified framework that delivers lightweight, explainable, and calibrated neural classifiers for multilingual movie script certification in English, Hindi, and Marathi. Unlike prior studies that operate on short snippets or monolingual text, our approach models entire scripts through chunk-level transformer encoding, knowledge distillation, and file-level temperature calibration, coupled with explainability-guided rule mapping for interpretable decision refinement. The proposed pipeline systematically integrates six stages (baselines, teacher modeling, distillation, calibration, explainability, and rule enrichment), yielding a compact yet trustworthy system. Experiments show that the distilled students retain over 85% of teacher accuracy while being 3× smaller, and temperature scaling substantially improves reliability (English Expected Calibration Error 0.303→0.086, Brier 0.684→0.540). Faithfulness analysis using deletion Area Under Curve confirms interpretable token attributions (0.157, 0.239, and 0.258 for English, Hindi, and Marathi, respectively). Moreover, rule integration improves accuracy (English 0.581→0.587) while offering human-auditable rationales. All models are deployment-feasible, exported to ONNX/TorchScript with 3.5× compression (545 MB→150 MB) and no performance loss. Together, these results establish a reproducible, end-to-end pipeline that unifies multilingual long-document modeling, calibration, and interpretability for film certification, advancing trustworthy Artificial Intelligence in regulatory Natural Language Processing. To our knowledge, this is the first work to build a unified, multilingual, and explainable pipeline for movie script certification using full-length scripts across MPAA and CBFC regulatory settings.
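
The temperature-calibration step can be sketched as follows: a single scalar T, chosen to minimize negative log-likelihood on held-out logits, rescales confidence without changing any argmax prediction. The numpy-only grid search below is a minimal sketch under assumed names; the paper's actual optimizer and search range are not specified here.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Choose the scalar temperature minimizing negative log-likelihood
    on held-out (validation) logits; predictions are unchanged."""
    n = np.arange(len(labels))
    nll = [-np.log(softmax(logits, T)[n, labels] + 1e-12).mean() for T in grid]
    return float(grid[int(np.argmin(nll))])
```

On overconfident logits (true logits inflated by a constant factor), the fitted T exceeds 1, flattening the probabilities back toward calibration, which is the mechanism behind the ECE improvements quoted above.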

[...] Read more.
Hepatitis C Diagnosis using Supervised Machine Learning Algorithms and Ensemble Learning Techniques

By Karthika Natarajan Koteswara Rao Makkena

DOI: https://doi.org/10.5815/ijitcs.2026.02.10, Pub. Date: 8 Apr. 2026

Hepatitis, a severe and highly impactful disease, poses significant challenges for healthcare systems, including limited diagnostic resources, delayed detection, and inadequate treatment infrastructure. This work addresses these issues by developing a machine-learning predictive system to classify hepatitis severity. By employing Logistic Regression, Random Forest, SVM, KNN, and ensemble techniques such as AdaBoost, CatBoost, and Gradient Boosting, the system enhances early detection and severity assessment. The issue of class imbalance was addressed using ADASYN and SMOTE methods applied to two separate datasets. For Dataset 1, following the use of the ADASYN technique, the achieved accuracies were 88.11% for Logistic Regression, 98.92% for Random Forest, 97.30% for AdaBoost, and 96.22% for Gradient Boosting. When SMOTE was employed on Dataset 1, Random Forest and Gradient Boosting reached accuracies of 98.38% and 96.76%, respectively. In the case of Dataset 2, AdaBoost achieved an accuracy of 93.75% after applying both ADASYN and SMOTE. These models analyze clinical data to deliver accurate, timely predictions, reducing the burden on resource-constrained healthcare systems. Ensemble methods enhance model robustness and accuracy, supporting improved decision-making and efficient resource allocation. Furthermore, SHAP offers global explanations of feature importance and force plots for local interpretations, while LIME increases the interpretability of results from black-box models, facilitating effective hepatitis management. Future work will focus on integrating interoperability standards, such as HL7 FHIR, to enable real-time data exchange, facilitating seamless risk assessment and clinical decision support within healthcare workflows.
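
The SMOTE-style oversampling used to balance the datasets can be sketched in a few lines: each synthetic minority sample interpolates between a real minority sample and one of its nearest minority neighbours. This is a minimal numpy sketch, not the imbalanced-learn implementation the authors likely used; function and parameter names are illustrative.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples: interpolate between a
    random minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)        # exclude self-matches
    nn = np.argsort(dist, axis=1)[:, :k]  # k nearest neighbour indices
    samples = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                # interpolation factor in [0, 1]
        samples.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(samples)
```

Because each synthetic point lies on a segment between two real minority samples, the new data stay inside the minority class's convex hull (ADASYN differs mainly by biasing generation toward harder-to-learn samples).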

[...] Read more.
Accident Detection and Estimation of Vehicle Speed and Count by Type in Road CCTV Images Using Machine Vision

By I. Kadek Rai Pramana I. Putu Agung Bayupati Gusti Made Arya Sasmita Ngoc Le

DOI: https://doi.org/10.5815/ijitcs.2026.02.11, Pub. Date: 8 Apr. 2026

This study presents an integrated traffic monitoring system for accident detection, vehicle counting by type, and vehicle speed estimation using roadside Closed-Circuit Television (CCTV) footage and machine vision based on the YOLOv11 architecture. The proposed methodology comprises data collection from heterogeneous sources, data preprocessing and augmentation, model fine-tuning on a custom Vehicle–Accident dataset, system deployment through a web-based application, and real-world evaluation. The YOLOv11 models were optimized to detect multiple vehicle categories and clearly defined accident classes under real traffic conditions. Experimental results indicate that the YOLOv11 Large (l) model achieves superior detection performance, with 81.8% precision, 75.8% recall, 82.1% mAP50, and 53.3% mAP50–95. Real-world testing further confirms its effectiveness, yielding an object detection accuracy of 99.24% and low speed estimation errors, with Mean Absolute Percentage Error (MAPE) of 3.56% for video-based evaluation and 5.54% for real-time evaluation. In contrast, the YOLOv11 Nano (n) model offers faster inference and lower computational requirements but exhibits reduced robustness in complex accident scenarios. The trained models are deployed in an interactive web application supporting image, video, and real-time inputs, enabling practical traffic monitoring and decision support. Overall, the YOLOv11l-Vehicle-Accident model is identified as the most suitable configuration for accuracy-critical traffic management systems, while Nano variants are better suited for resource-constrained deployments.
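
The MAPE figures quoted for speed estimation follow the standard definition, the mean of |actual − predicted| / actual expressed as a percentage:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error (in %) between ground-truth and
    estimated values; undefined when any actual value is zero."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)
```

For example, mape([50, 60], [55, 60]) averages errors of 10% and 0% to give 5.0.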

[...] Read more.
Mobile OTP Authentication Protocol Design and Implementation for Local Federated Clients to Federated Central Server via MQTT

By Narendra Babu Pamula Ajoy Kumar Khan Arnidam Sarkar

DOI: https://doi.org/10.5815/ijitcs.2026.02.12, Pub. Date: 8 Apr. 2026

Strong and effective authentication methods are more important than ever in the ever-changing field of cybersecurity. In this work, we design and implement a Mobile One-Time Password (OTP) Authentication Protocol for local federated clients that communicate with a federated central server via the Message Queuing Telemetry Transport (MQTT) protocol. This protocol strengthens the security foundation of federated systems by ensuring the safe and dependable delivery of OTPs while exploiting the lightweight and efficient characteristics of MQTT. The proposed protocol tackles the scalability, security, and latency issues that arise in federated setups. Through a thorough analysis and implementation, we show how the protocol can effectively mitigate potential security threats, such as replay attacks and unauthorized access, while maintaining user convenience. Experimental results show that our protocol strikes a balance between security and performance, making it a workable answer to modern federated authentication requirements.
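
The abstract does not specify the OTP construction used; for context, a standard HOTP generator (RFC 4226), which a protocol like this could transport as an MQTT payload, looks like this:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamic
    truncation to 31 bits, then the last `digits` decimal digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 4226 test secret b"12345678901234567890" and counter 0, this reproduces the published test vector "755224". The shared counter is also what lets a server reject replayed codes.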

[...] Read more.
Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The concept has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds. It is difficult to overstate the importance of document management systems as a necessity in an organization's workplace environment. Data were collected through interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures. Development followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was built with web technologies including XAMPP, HTML, and the PHP programming language. The system evaluation showed a successful outcome. After using the system, respondents' satisfaction with it was 96.60%. This shows that the document system was regarded as adequate and good enough to meet the specified requirements when users (secretaries and departmental personnel) used it. Results showed that the system developed yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-designed document management systems assist in holding and managing a substantial portion of an organization's knowledge assets, including documents and other associated items.

[...] Read more.
Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks (ANNs) are a branch of Artificial Intelligence (AI) and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks and AI. It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretability of data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are used for problem solving are pattern recognition, data analysis, control, and clustering. Artificial Neural Networks have abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of Neural Networks and Artificial Intelligence, provide an overview of the field, survey the areas where AI and ANNs are used, and discuss the critical role they play in different areas.

[...] Read more.
Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik Pulak Chandra Bhowmik U. A. Md. Ehsan Ali Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which can eventually lead the fetus towards serious health problems. However, early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data, which provides sophisticated information by monitoring the fetal heart rate signal, is used to predict potential risks to fetal wellbeing and to draw clinical conclusions. This paper proposes to analyze the antepartum CTG data (available in the UCI Machine Learning Repository) and develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL follows the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also applies distinct machine learning algorithms to the CTG dataset and determines their performance. The Stacking EL technique in this paper involves four tree-based machine learning algorithms, namely the Random Forest classifier, Decision Tree classifier, Extra Trees classifier, and Deep Forest classifier, as base learners. The CTG dataset contains 21 features, but only the 10 most important features are selected from the dataset with the Chi-square method for this experiment, and the features are then normalized with Min-Max scaling. Following that, Grid Search is applied to tune the hyperparameters of the base algorithms. Subsequently, 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative assessment is made between the individual base learning algorithms and the EL classifier model, and the findings depict the EL classifier's superiority in fetal health risk prediction, achieving an accuracy of about 96.05%. This study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve model accuracy and reduce error rates.
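
The Min-Max normalization step mentioned above rescales each selected feature to [0, 1]. A minimal sketch (mapping constant features to 0 to avoid division by zero is an implementation choice of this sketch, not something stated in the paper):

```python
import numpy as np

def min_max_scale(X):
    """Rescale each feature (column) to [0, 1]; constant columns map to 0."""
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / np.where(mx > mn, mx - mn, 1.0)
```

In practice the column minima and ranges are fitted on the training folds only and reused to transform the held-out fold, so no validation information leaks into training.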

[...] Read more.
Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti Owusu Nyarko-Boateng Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in a k-fold cross-validation training technique of machine learning predictive models is an essential element that impacts the model's performance. A right choice of k results in better accuracy, while a poorly chosen value for k might affect the model's performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor very high variance. However, there is no formal rule. To the best of our knowledge, few experimental studies have attempted to investigate the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms (Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT) and K-Nearest Neighbours (KNN)). It was observed that the value of k and model validation performance differ from one machine-learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area-under-the-curve measures, with less computational complexity than k = 10, across most of the algorithms. We discuss the study outcomes in detail and outline some guidelines for beginners in the machine learning field in selecting the best k value and machine learning algorithm for a given task.
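
The k-fold splitting being compared can be sketched directly: shuffle the indices once, partition them into k folds, and hold each fold out in turn. This is a minimal sketch; library implementations such as scikit-learn's KFold add options like stratification.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) index arrays for k-fold cross-validation:
    shuffle once, split into k near-equal folds, hold each out in turn."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]
```

Each sample appears in exactly one validation fold, so the k validation scores average over every data point; larger k means more training data per fold but k model fits, which is the bias-variance-cost trade-off the paper studies.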

Machine Learning based Wildfire Area Estimation Leveraging Weather Forecast Data

By Saket Sultania Rohit Sonawane Prashasti Kanikar

DOI: https://doi.org/10.5815/ijitcs.2025.01.01, Pub. Date: 8 Feb. 2025

Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. 
The final results show that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km² on the test data, indicating an average deviation of approximately 1.2 km² in the predictions.
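On the error metrics quoted above: RMSE penalizes large errors more heavily than the mean absolute error (MAE), which is why an RMSE of 1.564 km² can coexist with a smaller average absolute deviation. A minimal, self-contained sketch with made-up burned-area values (not from the paper's dataset):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: square errors, average, take the root."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mae(actual, predicted):
    """Mean absolute error: average of unsigned errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical burned areas in km².
actual    = [2.0, 5.5, 1.2, 8.0, 3.3]
predicted = [2.4, 4.9, 1.0, 9.6, 3.1]

print(f"RMSE = {rmse(actual, predicted):.3f} km²")  # dominated by the 1.6 miss
print(f"MAE  = {mae(actual, predicted):.3f} km²")
```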

PDF Marksheet Generator

By Srushti Shimpi Sanket Mandare Tyagraj Sonawane Aman Trivedi K. T. V. Reddy

DOI: https://doi.org/10.5815/ijitcs.2014.11.05, Pub. Date: 8 Oct. 2014

The Marksheet Generator is a flexible system for generating students' progress mark sheets. It is based mainly on database technology and the credit-based grading system (CBGS), and is targeted at small enterprises, schools, colleges and universities. It can produce sophisticated, ready-to-print mark sheets. The development of the mark sheet and gadget sheet focuses on describing tables with columns/rows and sub-columns/sub-rows, rules for selecting and summarizing data for a report, a particular table or a column/row, and formatting the report in the destination document. The adjustable data interface supports a popular data source (SQL Server) and report destination (PDF file). The system accesses students' exam information from the university database and generates the gadget sheet, which keeps track of student information in a properly listed manner. The project thus aims at developing a mark sheet generation system that universities can use to automate the distribution of digitally verifiable student result mark sheets. The system retrieves students' results from the institute's student database and generates the mark sheets in Portable Document Format, which is tamper-proof and thereby guarantees the authenticity of the document; this authenticity can also be verified easily.

Cloud Computing: A review of the Concepts and Deployment Models

By Tinankoria Diaby Babak Bashari Rad

DOI: https://doi.org/10.5815/ijitcs.2017.06.07, Pub. Date: 8 Jun. 2017

This paper presents a short review of Cloud Computing, explaining its evolution, history, and definition. Cloud computing is not a brand-new technology, but today it is one of the most rapidly emerging technologies because of its powerful capacity to change the manner in which data and services are managed. Beyond the evolution, history, and definition of cloud computing, the paper also presents its characteristics, service models, deployment models and roots.

A Systematic Review of Natural Language Processing in Healthcare

By Olaronke G. Iroju Janet O. Olaleke

DOI: https://doi.org/10.5815/ijitcs.2015.08.07, Pub. Date: 8 Jul. 2015

The healthcare system is a knowledge-driven industry that generates vast and growing volumes of narrative information from discharge summaries and reports, physicians' case notes, and pathologists' and radiologists' reports. This information is usually stored in unstructured and non-standardized formats in electronic healthcare systems, which makes it difficult for the systems to understand the content of the narrative information. Thus, access to valuable and meaningful healthcare information for decision making is a challenge. Nevertheless, Natural Language Processing (NLP) techniques have been used to structure narrative information in healthcare. NLP techniques can capture unstructured healthcare information, analyze its grammatical structure, determine the meaning of the information and translate it so that it can be easily understood by electronic healthcare systems. Consequently, NLP techniques reduce costs and improve the quality of healthcare. It is against this background that this paper reviews the NLP techniques used in healthcare, their applications, and their limitations.

Markov Models Applications in Natural Language Processing: A Survey

By Talal Almutiri Farrukh Nadeem

DOI: https://doi.org/10.5815/ijitcs.2022.02.01, Pub. Date: 8 Apr. 2022

Markov models are among the widely used machine learning techniques for processing natural language. Markov Chains and Hidden Markov Models are stochastic techniques for modeling dynamic systems in which the future state depends only on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, where it predicts hidden tags from observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce the dependence of NLP on lexicons and annotation tasks. In this paper, we focus on Markov models as a stochastic approach to NLP. A literature review was conducted to summarize research attempts, focusing on the methods and techniques that use Markov models for NLP tasks, together with their advantages and disadvantages. Most NLP research studies apply supervised models, improved by Markov models to decrease the dependency on annotation tasks; some others employ unsupervised solutions to reduce dependence on a lexicon or labeled datasets.
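To illustrate the first application surveyed here, a first-order Markov chain over words can be trained from bigram counts and then sampled to generate text. This toy sketch (standard-library Python, with a made-up corpus) shows only the core idea, not any system reviewed in the paper.

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# Train: record observed word -> next-word transitions (first-order chain).
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, n_words, seed=0):
    """Sample a word sequence by repeatedly following random transitions."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words - 1):
        followers = transitions.get(words[-1])
        if not followers:        # dead end: word has no observed successor
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Because the duplicated bigrams act as raw counts, frequent transitions are sampled more often, which is exactly the maximum-likelihood estimate of the chain's transition probabilities.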

Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag Ghadi H. Shaheen Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

Traffic accidents are one of the main causes of mortality. Worldwide, traffic accidents were expected to become the third leading cause of death by 2020. In Saudi Arabia, there are more than 460,000 car accidents every year, and the number is rising, especially during busy periods such as Ramadan and the Hajj season. The Saudi government is making efforts to lower the nation's car accident rate. This paper suggests a business process improvement for the car accident reports handled by Najm, in accordance with Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help to expedite the process and minimize turnaround time; the drone also provides quick accident response and records the scene with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant improvement of about 40% in turnaround time. Therefore, using drones can enhance Najm's accident response process in Saudi Arabia.

Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The field has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds, and the importance of document management systems as a necessity in an organization's workplace environment cannot be overstated. Data were collected through interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures. The development approach followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The results of the system evaluation showed a successful outcome. After using the system, respondents' satisfaction with it was 96.60%, which shows that the document system was regarded as adequate and good enough to meet the specified requirements when users (secretaries and departmental personnel) used it. Results showed that the system yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly help hold and manage a substantial portion of an organization's knowledge assets, which include documents and other associated items.

Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani James I. Obuhuma Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated, real-time management of processes and transactions. These systems contain a great deal of information and hence require secure authentication. Authentication here refers to the process of verifying an entity's or device's identity so as to grant it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. Password-based authentication has weaknesses that can easily be compromised. Cyber-attacks targeting ERP systems have become common in institutions of higher learning and cannot be underestimated, as they evolve with emerging technologies; some universities worldwide have been victims of cyber-attacks that targeted authentication vulnerabilities, damaging the institutions' reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication; specifically, it aimed at developing and validating a multi-factor authentication prototype to improve ERP system security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art technology being adopted to strengthen systems' authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that were later analyzed using descriptive and inferential statistics. Stratified, random and purposive sampling techniques were used to establish the sample size and the target group.
The dependent variable for the study was limited to the security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to the adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have the highest impact on system security. These three variables hence acted as the basis for the proposed multi-factor authentication framework for improved ERP system security.
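The "something the user has" factor mentioned in this abstract is commonly realized with time-based one-time passwords (TOTP, RFC 6238), which build on HMAC-based one-time passwords (HOTP, RFC 4226). The sketch below is a minimal standard-library illustration of that mechanism, not part of the prototype described in the paper.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test secret; counter 0 yields the published vector "755224".
print(hotp(b"12345678901234567890", 0))
```

A server verifying such a code recomputes it from a shared secret, so the password factor is supplemented by possession of the device holding that secret.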

Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the ability to interpret data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied over the last two decades. The applications where neural networks are most widely used for problem solving are pattern recognition, data analysis, control and clustering. Artificial Neural Networks have abundant features, including high processing speed and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of Neural Networks and Artificial Intelligence, provide an overview of where AI and ANNs are used, and discuss the critical role they play in different areas.

Early Formalization of AI-tools Usage in Software Engineering in Europe: Study of 2023

By Denis S. Pashchenko

DOI: https://doi.org/10.5815/ijitcs.2023.06.03, Pub. Date: 8 Dec. 2023

This scientific article presents the results of a study focused on the current practices and future prospects of AI-tools usage, specifically large language models (LLMs), in software development (SD) processes within European IT companies. The Pan-European study covers 35 SD teams from all regions of Europe and consists of three sections: the first section explores the current adoption of AI-tools in software production, the second section addresses common challenges in LLMs implementation, and the third section provides a forecast of the tech future in AI-tools development for SD.
The study reveals that AI-tools, particularly LLMs, have gained popularity and acceptance in European IT companies for tasks related to software design and construction, coding, and software documentation. However, their usage for business and system analysis remains limited, and challenges such as resource constraints and organizational resistance are evident.
The article also highlights the potential of AI-tools in the software development process, such as automating routine operations, speeding up work processes, and enhancing software product excellence. Moreover, the research examines the transformation of IT paradigms driven by AI-tools, leading to changes in the skill sets of software developers. Although the impact of LLMs on the software development industry is perceived as modest, experts anticipate significant changes in the next 10 years, including AI-tools integration into advanced IDEs, software project management systems, and product management tools.
Ethical concerns about data ownership, information security and legal aspects of AI-tools usage are also discussed, with experts emphasizing the need for legal formalization and regulation in the AI domain. Overall, the study highlights the growing importance and potential of AI-tools in software development, as well as the need for careful consideration of challenges and ethical implications to fully leverage their benefits.

Detecting and Preventing Common Web Application Vulnerabilities: A Comprehensive Approach

By Najla Odeh Sherin Hijazi

DOI: https://doi.org/10.5815/ijitcs.2023.03.03, Pub. Date: 8 Jun. 2023

Web applications are becoming very important in our lives, as many sensitive processes depend on them; it is therefore critical to keep them safe and invulnerable to malicious attacks. Most studies focus on ways to detect these attacks individually. In this study, we develop a new vulnerability system to detect and prevent vulnerabilities in web applications, with multiple functions for dealing with several recurring vulnerabilities. The proposed system provides detection and prevention of four types of vulnerability: SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. For each type of vulnerability, we investigated how it works, then the process of detecting it, and finally how to prevent it, which achieved three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through a practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this work lie in its potential to enhance the security of online systems and reduce the risk of data breaches.
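Of the four vulnerability classes listed in this abstract, SQL injection has the best-known standard prevention: parameterized queries, which keep attacker-supplied strings out of the SQL text entirely. The snippet below (standard-library sqlite3, with a hypothetical users table) illustrates the contrast; it is a generic example, not the detection logic of the proposed system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and matches every row.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'").fetchall()

# SAFE: a parameterized query treats the payload as a plain literal value,
# so nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)).fetchall()

print("concatenated query leaked:", leaked)   # [('alice',)]
print("parameterized query found:", safe)     # []
```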
